Minimax Distribution Estimation in Wasserstein Distance

Authors

  • Shashank Singh
  • Barnabás Póczos
Abstract

The Wasserstein metric is an important measure of distance between probability distributions, with several applications in machine learning, statistics, probability theory, and data analysis. In this paper, we upper and lower bound minimax rates for the problem of estimating a probability distribution under Wasserstein loss, in terms of metric properties, such as covering and packing numbers, of the underlying sample space.
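
As a loose illustration of the estimation problem (not taken from the paper), the sketch below estimates a distribution on [0, 1] by the empirical measure of n i.i.d. samples and measures the 1-Wasserstein loss against a large reference sample standing in for the true distribution; the Beta(2, 5) target, the sample sizes, and the use of scipy.stats.wasserstein_distance are all choices made for this example.

    # Minimal sketch (not from the paper): estimate a distribution on [0, 1]
    # by the empirical measure of n samples, then measure the W1 loss against
    # a large reference sample standing in for the true distribution.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)

    def empirical_w1_loss(n, n_ref=100_000):
        """W1 distance between the empirical measure of n Beta(2, 5) samples
        and a large reference sample approximating the true distribution."""
        sample = rng.beta(2, 5, size=n)         # data seen by the estimator
        reference = rng.beta(2, 5, size=n_ref)  # stand-in for the truth
        return wasserstein_distance(sample, reference)

    for n in (100, 1_000, 10_000):
        print(n, empirical_w1_loss(n))

On a bounded one-dimensional sample space the printed loss shrinks roughly like n^(-1/2); rates of this kind are what the paper bounds in terms of covering and packing numbers of the sample space.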

Related papers

Minimax rates of convergence for Wasserstein deconvolution with supersmooth errors in any dimension

The subject of this paper is the estimation of a probability measure on R from data observed with an additive noise, under the Wasserstein metric of order p (with p ≥ 1). We assume that the distribution of the errors is known and belongs to a class of supersmooth distributions, and we give optimal rates of convergence for the Wasserstein metric of order p. In particular, we show how to use the ...

Minimax Statistical Learning and Domain Adaptation with Wasserstein Distances

As opposed to standard empirical risk minimization (ERM), distributionally robust optimization aims to minimize the worst-case risk over a larger ambiguity set containing the original empirical distribution of the training data. In this work, we describe a minimax framework for statistical learning with ambiguity sets given by balls in Wasserstein space. In particular, we prove a generalization...
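
As a rough sketch of the ambiguity-set idea (an assumption-laden illustration, not the framework of that paper): by Kantorovich-Rubinstein duality, for a loss that is L-Lipschitz in the data point, the worst-case risk over a 1-Wasserstein ball of radius rho around the empirical distribution is at most the empirical risk plus rho times L. The absolute-error loss, the Gaussian data, and the radii below are made up for the example.

    # Rough sketch (not the paper's method): for a loss that is L-Lipschitz in
    # the data point, Kantorovich-Rubinstein duality gives
    #   sup over W1(Q, P_n) <= rho of E_Q[loss] <= E_{P_n}[loss] + rho * L,
    # so the worst case over the Wasserstein ball becomes an additive penalty.
    import numpy as np

    def robust_risk_upper_bound(data, theta, rho):
        """Empirical absolute-error risk of a location guess theta, plus the
        Wasserstein-ball penalty; |x - theta| is 1-Lipschitz in x, so L = 1."""
        empirical_risk = np.mean(np.abs(data - theta))
        return empirical_risk + rho * 1.0

    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=1.0, size=500)   # made-up training data
    for rho in (0.0, 0.1, 0.5):
        print(rho, robust_risk_upper_bound(data, theta=2.0, rho=rho))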

On strong identifiability and optimal rates of parameter estimation in finite mixtures

Abstract: This paper studies identifiability and convergence behaviors for parameters of multiple types, including matrix-variate ones, that arise in finite mixtures, and the effects of model fitting with extra mixing components. We consider several notions of strong identifiability in a matrix-variate setting, and use them to establish sharp inequalities relating the distance of mixture densit...

On strong identifiability and convergence rates of parameter estimation in finite mixtures

Abstract: This paper studies identifiability and convergence behaviors for parameters of multiple types, including matrix-variate ones, that arise in finite mixtures, and the effects of model fitting with extra mixing components. We consider several notions of strong identifiability in a matrix-variate setting, and use them to establish sharp inequalities relating the distance of mixture densit...

Minimax Estimation of the Scale Parameter in a Family of Transformed Chi-Square Distributions under Asymmetric Squared Log Error and MLINEX Loss Functions

This paper is concerned with the problem of finding the minimax estimators of the scale parameter in a family of transformed chi-square distributions, under asymmetric squared log error (SLE) and modified linear exponential (MLINEX) loss functions, using the Lehmann Theorem [2]. We also show that the results of Podder et al. [4] for the Pareto distribution are a special case of our results for th...

Journal:
  • CoRR

Volume abs/1802.08855

Publication year 2018